Results 1 - 20 of 473
1.
ALTEX ; 41(2): 152-178, 2024.
Article in English | MEDLINE | ID: mdl-38579692

ABSTRACT

Developmental neurotoxicity (DNT) testing has seen enormous progress over the last two decades. Preceding even the publication of the animal-based OECD test guideline for DNT testing in 2007, a series of non-animal technology workshops and conferences (starting in 2005) shaped a community that has delivered a comprehensive battery of in vitro test methods (IVB). Its data interpretation is covered by a very recent OECD test guidance (No. 377). Here, we aim to provide an overview of progress in the field, focusing on the evolution of testing strategies, the role of emerging technologies, and the impact of OECD test guidelines on DNT testing. In particular, this is an example of the targeted development of an animal-free testing approach for one of the most complex hazards of chemicals to human health. These developments started literally from a blank slate, with no proposed alternative methods available. Over two decades, cutting-edge science enabled the design of a testing approach that spares animals and enables throughput for this challenging hazard. While it is evident that the field needs guidance and regulation, the massive economic impact of decreased human cognitive capacity caused by chemical exposure should be prioritized more highly. Beyond this, the claim to fame of DNT in vitro testing is the enormous scientific progress it has brought for understanding the human brain, its development, and how it can be perturbed.


Developmental neurotoxicity (DNT) testing predicts the hazard of exposure to chemicals to human brain development. Comprehensive advanced non-animal testing strategies using cutting-edge technology can now replace animal-based approaches to assess this complex hazard. These strategies can assess large numbers of chemicals more accurately and efficiently than the animal-based approach. Recent OECD test guidance has formalized this battery of in vitro test methods for DNT, marking a pivotal achievement in the field. The shift towards non-animal testing reflects both a commitment to animal welfare and a growing recognition of the economic and public health impacts associated with impaired cognitive function caused by chemical exposures. These innovations ultimately contribute to safer chemical management and better protection of human health, especially during the vulnerable stages of brain development.


Subjects
Neurotoxicity Syndromes , Toxicity Tests , Animals , Humans , Neurotoxicity Syndromes/etiology , Animal Models , Animal Testing Alternatives
2.
ALTEX ; 41(2): 179-201, 2024.
Article in English | MEDLINE | ID: mdl-38629803

ABSTRACT

When The Principles of Humane Experimental Technique was published in 1959, authors William Russell and Rex Burch had a modest goal: to make researchers think about what they were doing in the laboratory - and to do it more humanely. Sixty years later, their groundbreaking book was celebrated for inspiring a revolution in science and launching a new field: The 3Rs of alternatives to animal experimentation. On November 22, 2019, some pioneering and leading scientists and researchers in the field gathered at the Johns Hopkins Bloomberg School of Public Health in Baltimore for the 60 Years of the 3Rs Symposium: Lessons Learned and the Road Ahead. The event was sponsored by the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), the Foundation for Chemistry Research and Initiatives, the Alternative Research & Development Foundation (ARDF), the American Cleaning Institute (ACI), the International Fragrance Association (IFRA), the Institute for In Vitro Sciences (IIVS), John "Jack" R. Fowle III, and the Society of Toxicology (SoT). Fourteen presentations shared the history behind the groundbreaking publication, international efforts to achieve its aims, stumbling blocks to progress, as well as remarkable achievements. The day was a tribute to Russell and Burch, and a testament to what is possible when people from many walks of life - science, government, and industry - work toward a common goal.


William Russell and Rex Burch published their book The Principles of Humane Experimental Technique in 1959. The book encouraged researchers to replace animal experiments where possible, to refine experiments with animals in order to reduce their suffering, and to reduce the number of animals used in experiments to a minimum. Sixty years later, a group of pioneering and leading scientists and researchers in the field gathered to share how the publication came about and how the vision inspired international collaborations and successes on many different levels, including new laws. The paper includes an overview of important milestones in the history of alternatives to animal experimentation.


Subjects
Animal Experimentation , Animal Testing Alternatives , Animals , Humans , Animal Testing Alternatives/methods , Research Design , Industries , Animal Welfare
3.
Environ Sci Technol ; 58(12): 5267-5278, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38478874

ABSTRACT

Tetrabromobisphenol A (TBBPA), the most extensively utilized brominated flame retardant, has raised growing concerns regarding its environmental and health risks. Neurovascular formation is essential for metabolically supporting neuronal networks. However, previous studies primarily addressed the neuronal injuries caused by TBBPA; its impact on the neurovasculature and the underlying molecular mechanisms have yet to be elucidated. In this study, 5, 30, 100, and 300 µg/L of TBBPA were administered to Tg (fli1a: eGFP) zebrafish larvae at 2-72 h postfertilization (hpf). The findings revealed that TBBPA impaired cerebral and ocular angiogenesis in zebrafish. Metabolomics analysis showed that TBBPA-treated neuroendothelial cells exhibited disruption of the TCA cycle and the Warburg effect pathway. TBBPA induced a significant reduction in glycolysis and mitochondrial ATP production rates, accompanied by mitochondrial fragmentation and an increase in mitochondrial reactive oxygen species (mitoROS) production in neuroendothelial cells. Supplementation with alpha-ketoglutaric acid, a key metabolite of the TCA cycle, mitigated TBBPA-induced mitochondrial damage, reduced mitoROS production, and restored angiogenesis in zebrafish larvae. Our results suggest that TBBPA exposure induced neurovascular injury via mitochondrial metabolic perturbation mediated by mitoROS signaling, providing novel insight into the neurovascular toxicity and mode of action of TBBPA.


Subjects
Flame Retardants , Polybrominated Biphenyls , Animals , Humans , Zebrafish , Endothelial Cells/metabolism , Polybrominated Biphenyls/toxicity , Larva/metabolism , Flame Retardants/toxicity
4.
Metabolites ; 14(2)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38392990

ABSTRACT

Metabolomics is emerging as a powerful systems biology approach for improving preclinical drug safety assessment. This review discusses current applications and future trends of metabolomics in toxicology and drug development. Metabolomics can elucidate adverse outcome pathways by detecting endogenous biochemical alterations underlying toxicity mechanisms. Furthermore, metabolomics enables better characterization of human environmental exposures and their influence on disease pathogenesis. Metabolomics approaches are being increasingly incorporated into toxicology studies and safety pharmacology evaluations to gain mechanistic insights and identify early biomarkers of toxicity. However, realizing the full potential of metabolomics in regulatory decision making requires a robust demonstration of reliability through quality assurance practices, reference materials, and interlaboratory studies. Overall, metabolomics shows great promise in strengthening the mechanistic understanding of toxicity, enhancing routine safety screening, and transforming exposure and risk assessment paradigms. Integration of metabolomics with computational, in vitro, and personalized medicine innovations will shape future applications in predictive toxicology.

5.
Arch Toxicol ; 98(3): 735-754, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38244040

ABSTRACT

The rapid progress of AI impacts diverse scientific disciplines, including toxicology, and has the potential to transform chemical safety evaluation. Toxicology has evolved from an empirical science focused on observing apical outcomes of chemical exposure, to a data-rich field ripe for AI integration. The volume, variety and velocity of toxicological data from legacy studies, literature, high-throughput assays, sensor technologies and omics approaches create opportunities but also complexities that AI can help address. In particular, machine learning is well suited to handle and integrate large, heterogeneous datasets that are both structured and unstructured, a key challenge in modern toxicology. AI methods like deep neural networks, large language models, and natural language processing have successfully predicted toxicity endpoints, analyzed high-throughput data, extracted facts from literature, and generated synthetic data. Beyond automating data capture, analysis, and prediction, AI techniques show promise for accelerating quantitative risk assessment by providing probabilistic outputs to capture uncertainties. AI also enables explanation methods to unravel mechanisms and increase trust in modeled predictions. However, issues like model interpretability, data biases, and transparency currently limit regulatory endorsement of AI. Multidisciplinary collaboration is needed to ensure development of interpretable, robust, and human-centered AI systems. Rather than just automating human tasks at scale, transformative AI can catalyze innovation in how evidence is gathered, data are generated, hypotheses are formed and tested, and tasks are performed to usher new paradigms in chemical safety assessment. Used judiciously, AI has immense potential to advance toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline to better safeguard human and environmental wellbeing across diverse populations.


Subjects
Artificial Intelligence , Chemical Safety , Humans , Neural Networks, Computer , Machine Learning , Catalysis
6.
Adv Healthc Mater ; : e2302745, 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38252094

ABSTRACT

Brain organoids are 3D in vitro culture systems derived from human pluripotent stem cells that self-organize to model features of the (developing) human brain. This review examines the techniques behind organoid generation, their current and potential applications, and future directions for the field. Brain organoids possess complex architecture containing various neural cell types, synapses, and myelination. They have been utilized for toxicology testing, disease modeling, infection studies, personalized medicine, and gene-environment interaction studies. An emerging concept termed Organoid Intelligence (OI) combines organoids with artificial intelligence systems to generate learning and memory, with the goals of modeling cognition and enabling biological computing applications. Brain organoids allow neuroscience studies not previously achievable with traditional techniques, and have the potential to transform disease modeling, drug development, and the understanding of human brain development and disorders. The aspirational vision of OI parallels the origins of artificial intelligence, and efforts are underway to map a roadmap toward its realization. In summary, brain organoids constitute a disruptive technology that is rapidly advancing and gaining traction across multiple disciplines.

7.
ALTEX ; 41(2): 273-281, 2024.
Article in English | MEDLINE | ID: mdl-38215352

ABSTRACT

Because of both the shortcomings of existing risk assessment methodologies and the newly available tools to predict hazard and risk with machine learning approaches, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also to estimate the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches but comes at the cost of an increased complexity of the process as it requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Based on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorized into probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the acknowledgement of user-friendly open-source software, a rise in the expertise of toxicologists required to understand and interpret artificial intelligence models, and the honest communication of uncertainty in risk assessment to the public.


Probabilistic risk assessment, initially from engineering, is applied in toxicology to understand chemical-related hazards and their consequences. In toxicology, uncertainties abound: unclear molecular events, varied proposed outcomes, and population-level assessments for issues like neurodevelopmental disorders. Establishing links between chemical exposures and diseases, especially rare events like birth defects, often demands extensive studies. Existing methods struggle with subtle effects or those affecting specific groups. Future risk assessments must address developmental disease origins, presenting challenges beyond current capabilities. The intricate nature of many toxicological processes, lack of consensus on mechanisms and outcomes, and the need for nuanced population-level assessments highlight the complexities in understanding and quantifying risks associated with chemical exposures in the field of toxicology.
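The shift from deterministic verdicts to probabilistic, dose-dependent outcomes described above can be illustrated with a minimal Monte Carlo sketch (an illustration only, not a method from the paper): instead of a yes/no category, the assessment returns the probability that an uncertain exposure dose exceeds an uncertain effect threshold.

```python
import random

def prob_exceedance(exp_mu, exp_sigma, thr_mu, thr_sigma, n=50_000, seed=1):
    """Monte Carlo estimate of P(exposure dose > effect threshold).

    Exposure and threshold are both modeled as lognormal distributions
    (parameters given on the natural-log scale), a common simplifying
    assumption; the result is a probabilistic hazard outcome rather
    than a categorical one.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = 0
    for _ in range(n):
        exposure = rng.lognormvariate(exp_mu, exp_sigma)
        threshold = rng.lognormvariate(thr_mu, thr_sigma)
        if exposure > threshold:
            hits += 1
    return hits / n
```

The returned probability, together with its sensitivity to the assumed distributions, is exactly the kind of uncertainty that the abstract argues should be communicated honestly to the public.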


Subjects
Artificial Intelligence , Toxicology , Animals , Humans , Animal Testing Alternatives , Risk Assessment/methods , Uncertainty , Toxicology/methods
8.
ALTEX ; 41(1): 3-19, 2024.
Article in English | MEDLINE | ID: mdl-38194639

ABSTRACT

Green toxicology is marching chemistry into the 21st century. This emerging framework will transform how chemical safety is evaluated by incorporating evaluation of the hazards, exposures, and risks associated with chemicals into early product development in a way that minimizes adverse impacts on human and environmental health. The goal is to minimize toxic threats across entire supply chains through smarter designs and policies. Traditional animal testing methods are replaced by faster, cutting-edge innovations like organs-on-chips and artificial intelligence predictive models that are also more cost-effective. Core principles of green toxicology include utilizing alternative test methods, applying the precautionary principle, considering lifetime impacts, and emphasizing risk prevention over reaction. This paper provides an overview of these foundational concepts and describes current initiatives and future opportunities to advance the adoption of green toxicology approaches. Challenges and limitations are also discussed. Green shoots are emerging with governments offering carrots like the European Green Deal to nudge industry. Notably, animal rights and environmental groups have different ideas about the needs for testing and their consequences for animal use. Green toxicology represents the way forward to support both these societal needs with sufficient throughput and human relevance for hazard information and minimal animal suffering. Green toxicology thus sets the stage to synergize human health and ecological values. Overall, the integration of green chemistry and toxicology has potential to profoundly shift how chemical risks are evaluated and managed to achieve safety goals in a more ethical, ecologically-conscious manner.


Green toxicology aims to make chemicals safer by design. It focuses on preventing toxicity issues early during development instead of testing after products are developed. Green toxicology uses modern non-animal methods like computer models and lab tests with human cells to predict if chemicals could be hazardous. Benefits are faster results, lower costs, and less animal testing. The principles of green toxicology include using alternative tests, applying caution even with uncertain data, considering lifetime impacts across global supply chains, and emphasizing prevention over reaction. The article highlights European and US policy efforts to spur sustainable chemistry innovation which will necessitate greener approaches to assess new materials and drive adoption. Overall, green toxicology seeks to integrate safer design concepts so that human and environmental health are valued equally with functionality and profit. This alignment promises safer, ethical products but faces challenges around validating new methods and overcoming institutional resistance to change.


Subjects
Artificial Intelligence , Chemical Safety , Animals , Humans , Animal Testing Alternatives , Environmental Health , Industries
9.
Altern Lab Anim ; 52(2): 117-131, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38235727

ABSTRACT

The first Stakeholder Network Meeting of the EU Horizon 2020-funded ONTOX project was held on 13-14 March 2023, in Brussels, Belgium. The discussion centred around identifying specific challenges, barriers and drivers in relation to the implementation of non-animal new approach methodologies (NAMs) and probabilistic risk assessment (PRA), in order to help address the issues and rank them according to their associated level of difficulty. ONTOX aims to advance the assessment of chemical risk to humans, without the use of animal testing, by developing non-animal NAMs and PRA in line with 21st century toxicity testing principles. Stakeholder groups (regulatory authorities, companies, academia, non-governmental organisations) were identified and invited to participate in a meeting and a survey, by which their current position in relation to the implementation of NAMs and PRA was ascertained, as well as specific challenges and drivers highlighted. The survey analysis revealed areas of agreement and disagreement among stakeholders on topics such as capacity building, sustainability, regulatory acceptance, validation of adverse outcome pathways, acceptance of artificial intelligence (AI) in risk assessment, and guaranteeing consumer safety. The stakeholder network meeting resulted in the identification of barriers, drivers and specific challenges that need to be addressed. Breakout groups discussed topics such as hazard versus risk assessment, future reliance on AI and machine learning, regulatory requirements for industry and sustainability of the ONTOX Hub platform. The outputs from these discussions provided insights for overcoming barriers and leveraging drivers for implementing NAMs and PRA. It was concluded that there is a continued need for stakeholder engagement, including the organisation of a 'hackathon' to tackle challenges, to ensure the successful implementation of NAMs and PRA in chemical risk assessment.


Subjects
Adverse Outcome Pathways , Artificial Intelligence , Animals , Humans , Toxicity Tests , Risk Assessment , Belgium
10.
ALTEX ; 2023 12 01.
Article in English | MEDLINE | ID: mdl-38043132

ABSTRACT

Historical data from control groups in animal toxicity studies are currently used mainly for comparative purposes to assess the validity and robustness of study results. Due to the highly controlled environment in which the studies are performed and the homogeneity of the animal collectives, it has been proposed to use the historical data for building so-called virtual control groups, which could partly or entirely replace the concurrent control. This would constitute a substantial contribution to the reduction of animal use in safety studies. Before the concept can be implemented, the prerequisites regarding data collection, curation, and statistical evaluation, together with a validation strategy, need to be identified to avoid any impairment of the study outcome and subsequent consequences for human risk assessment. To further assess and develop the concept of virtual control groups, the transatlantic think tank for toxicology (t4) sponsored a workshop with stakeholders from the pharmaceutical and chemical industries, academia, the FDA, contract research organizations (CROs), and non-governmental organizations in Washington, which took place in March 2023. This report summarizes the current efforts of a European initiative to share, collect, and curate animal control data in a centralized database and the first approaches to identify optimal matching criteria between virtual controls and the treatment arms of a study, as well as first reflections about strategies for a qualification procedure and potential pitfalls of the concept.


Animal safety studies are usually performed with three groups of animals where increasing amounts of the test chemical are given to the animals and one control group where the animals do not receive the test chemical. The design of such studies, the characteristics of the animals, and the measured parameters are often very similar from study to study. Therefore, it has been suggested that measurement data from the control groups could be reused from study to study to lower the total number of animals per study. This could reduce animal use by up to 25% for such standardized studies. A workshop was held to discuss the pros and cons of such a concept and what would have to be done to implement it without threatening the reliability of the study outcome or the resulting human risk assessment.
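The matching idea can be sketched in a few lines (the covariate names, tolerance, and minimum group size below are hypothetical illustrations, not the criteria discussed at the workshop): historical control records are reused only when enough of them match the planned study's design, otherwise a concurrent control group is still required.

```python
def match_virtual_controls(historical, species, strain, sex, age_weeks,
                           age_tol=2, min_n=10):
    """Select historical control-group records whose covariates match a
    planned study. Returns the matching records, or an empty list when
    fewer than min_n are found (signalling a fall-back to a full
    concurrent control group). Field names are illustrative.
    """
    matches = [
        rec for rec in historical
        if rec["species"] == species
        and rec["strain"] == strain
        and rec["sex"] == sex
        and abs(rec["age_weeks"] - age_weeks) <= age_tol
    ]
    return matches if len(matches) >= min_n else []
```

A real qualification procedure would add many more matching criteria (supplier, diet, housing, measurement methods) and statistical checks that the virtual controls do not bias the study outcome.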

11.
Front Toxicol ; 5: 1216802, 2023.
Article in English | MEDLINE | ID: mdl-37908592

ABSTRACT

Introduction: The positive identification of xenobiotics and their metabolites in human biosamples is an integral aspect of exposomics research, yet challenges in compound annotation and identification continue to limit the feasibility of comprehensive identification of total chemical exposure. Nonetheless, the adoption of in silico tools such as metabolite prediction software, QSAR-ready structural conversion workflows, and molecular standards databases can aid in identifying novel compounds in untargeted mass spectral investigations, permitting the assessment of a more expansive pool of compounds for human health hazard. This strategy is particularly applicable when it comes to flame retardant chemicals. The population is ubiquitously exposed to flame retardants, and evidence implicates some of these compounds as developmental neurotoxicants, endocrine disruptors, reproductive toxicants, immunotoxicants, and carcinogens. However, many flame retardants are poorly characterized, have not been linked to a definitive mode of toxic action, and are known to share metabolic breakdown products which may themselves harbor toxicity. As U.S. regulatory bodies begin to pursue a subclass-based risk assessment of organohalogen flame retardants, little consideration has been paid to the role of potentially toxic metabolites, or to expanding the identification of parent flame retardants and their metabolic breakdown products in human biosamples to better inform the human health hazards imposed by these compounds. Methods: The purpose of this study is to utilize publicly available in silico tools to 1) characterize the structural and metabolic fates of proposed flame retardant classes, 2) predict first-pass metabolites, 3) ascertain whether metabolic products segregate among parent flame retardant classification patterns, and 4) assess the existing coverage of these compounds in mass spectral databases.
Results: We found that flame retardant classes as currently defined by the National Academies of Sciences, Engineering, and Medicine (NASEM) are structurally diverse, with highly variable predicted pharmacokinetic properties and metabolic fates among member compounds. The vast majority of flame retardants (96%) and their predicted metabolites (99%) are not present in spectral databases, posing a challenge for identifying these compounds in human biosamples. However, we also demonstrate the utility of publicly available in silico methods in generating a fit-for-purpose synthetic spectral library for flame retardants and their metabolites that have yet to be identified in human biosamples. Discussion: In conclusion, exposomics studies making use of fit-for-purpose synthetic spectral databases will better resolve internal exposures and windows of vulnerability associated with complex exposures to flame retardant chemicals, and the associated perturbations of neurodevelopmental, reproductive, and other apical human health endpoints.
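The core operation behind searching such a spectral library is a similarity score between a measured spectrum and each library entry. A minimal sketch of one common choice, cosine similarity over m/z-matched peaks, follows (an illustration of the general technique; the study's actual scoring scheme is not specified here):

```python
import math

def cosine_score(spec_a, spec_b, tol=0.01):
    """Cosine similarity between two centroided mass spectra, each given
    as a list of (m/z, intensity) peaks. Peaks are matched greedily to
    the closest unused peak within an m/z tolerance (Da); unmatched
    peaks contribute only to the norms.
    """
    dot = 0.0
    used = set()
    for mz_a, int_a in spec_a:
        best_d, best_j = None, None
        for j, (mz_b, _) in enumerate(spec_b):
            if j in used:
                continue
            d = abs(mz_a - mz_b)
            if d <= tol and (best_d is None or d < best_d):
                best_d, best_j = d, j
        if best_j is not None:
            used.add(best_j)
            dot += int_a * spec_b[best_j][1]
    norm_a = math.sqrt(sum(i * i for _, i in spec_a))
    norm_b = math.sqrt(sum(i * i for _, i in spec_b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Identical spectra score 1.0 and spectra with no shared peaks score 0.0; a synthetic library entry with a score above a chosen cutoff becomes a candidate annotation for follow-up confirmation.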

12.
Front Artif Intell ; 6: 1269932, 2023.
Article in English | MEDLINE | ID: mdl-37915539

ABSTRACT

The rapid progress of AI impacts various areas of life, including toxicology, and promises a major role for AI in future risk assessments. Toxicology has shifted from a purely empirical science focused on observing chemical exposure outcomes to a data-rich field ripe for AI integration. AI methods are well-suited to handling and integrating large, diverse data volumes - a key challenge in modern toxicology. Additionally, AI enables Predictive Toxicology, as demonstrated by the automated read-across tool RASAR that achieved 87% balanced accuracy across nine OECD tests and 190,000 chemicals, outperforming animal test reproducibility. AI's ability to handle big data and provide probabilistic outputs facilitates probabilistic risk assessment. Rather than just replicating human skills at larger scales, AI should be viewed as a transformative technology. Despite potential challenges, like model black-boxing and dataset biases, explainable AI (xAI) is emerging to address these issues.
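Balanced accuracy, the metric cited for RASAR above, is the mean of sensitivity and specificity; unlike plain accuracy, it cannot be inflated by simply predicting the majority (typically non-toxic) class. A minimal sketch of its computation:

```python
def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy for binary labels (1 = toxic, 0 = non-toxic):
    the mean of sensitivity (true-positive rate) and specificity
    (true-negative rate), robust to class imbalance.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)
```

For example, on a dataset that is 90% non-toxic, a model that always predicts "non-toxic" reaches 90% plain accuracy but only 50% balanced accuracy, which is why the 87% figure is a meaningful benchmark against animal test reproducibility.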

13.
ALTEX ; 40(4): 559-570, 2023.
Article in English | MEDLINE | ID: mdl-37889187

ABSTRACT

Toxicology has undergone a transformation from an observational science to a data-rich discipline ripe for artificial intelligence (AI) integration. The exponential growth in computing power coupled with accumulation of large toxicological datasets has created new opportunities to apply techniques like machine learning and especially deep learning to enhance chemical hazard assessment. This article provides an overview of key developments in AI-enabled toxicology, including early expert systems, statistical learning methods like quantitative structure-activity relationships (QSARs), recent advances with deep neural networks, and emerging trends. The promises and challenges of AI adoption for predictive toxicology, data analysis, risk assessment, and mechanistic research are discussed. Responsible development and application of interpretable and human-centered AI tools through multidisciplinary collaboration can accelerate evidence-based toxicology to better protect human health and the environment. However, AI is not a panacea and must be thoughtfully designed and utilized alongside ongoing efforts to improve primary evidence generation and appraisal.


Subjects
Animal Testing Alternatives , Artificial Intelligence , Humans , Animals , Machine Learning
14.
ALTEX ; 40(4): 706-712, 2023.
Article in English | MEDLINE | ID: mdl-37889190

ABSTRACT

Every test procedure, scientific and non-scientific, has inherent uncertainties, even when performed according to a standard operating procedure (SOP). In addition, it is prone to errors, defects, and mistakes introduced by operators, laboratory equipment, or materials used. Adherence to an SOP and comprehensive validation of the test method cannot guarantee that each test run produces data within the acceptable range of variability and with the precision and accuracy determined during the method validation. We illustrate here (part I) why controlling the validity of each test run is an important element of experimental design. The definition and application of acceptance criteria (AC) for the validity of test runs is important for the setup and use of test methods, particularly for the use of new approach methods (NAM) in toxicity testing. AC can be used for decision rules on how to handle data, e.g., to accept the data for further use (AC fulfilled) or to reject the data (AC not fulfilled). The adherence to AC has important requirements and consequences that may seem surprising at first sight: (i) AC depend on a test method's objectives, e.g., on the types/concentrations of chemicals tested, the regulatory context, the desired throughput; (ii) AC are applied and documented at each test run, while validation of a method (including the definition of AC) is only performed once; (iii) if AC are altered, then the set of data produced by a method can change. AC, if missing, are the blind spot of quality assurance: Test results may not be reliable and comparable. The establishment and uses of AC will be further detailed in part II of this series.
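The decision rule described above can be sketched as per-run range checks (the endpoint names and acceptance ranges below are hypothetical illustrations, not values from a real SOP): the criteria are fixed once during method validation, then applied and documented at every test run, and data from a failing run are rejected rather than silently reused.

```python
def run_is_valid(run, criteria):
    """Apply predefined acceptance criteria (AC) to one test run.

    `criteria` maps an endpoint name to its (low, high) acceptance
    range as fixed during method validation; `run` maps the same
    names to the values observed in this run. Returns a tuple
    (accepted, failed_criteria).
    """
    failed = [name for name, (lo, hi) in criteria.items()
              if not (lo <= run[name] <= hi)]
    return (len(failed) == 0, failed)
```

Documenting `failed_criteria` alongside the accept/reject decision gives the audit trail that makes test results reliable and comparable across runs.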


Subjects
Biological Science Disciplines , Toxicity Tests , Humans , Research Design
15.
J Infect Dis ; 228(Suppl 5): S337-S354, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37669225

ABSTRACT

The National Center for Advancing Translational Sciences (NCATS) Assay Guidance Manual (AGM) Workshop on 3D Tissue Models for Antiviral Drug Development, held virtually on 7-8 June 2022, provided comprehensive coverage of critical concepts intended to help scientists establish robust, reproducible, and scalable 3D tissue models to study viruses with pandemic potential. This workshop was organized by NCATS, the National Institute of Allergy and Infectious Diseases, and the Bill and Melinda Gates Foundation. During the workshop, scientific experts from academia, industry, and government provided an overview of 3D tissue models' utility and limitations, use of existing 3D tissue models for antiviral drug development, practical advice, best practices, and case studies about the application of available 3D tissue models to infectious disease modeling. This report includes a summary of each workshop session as well as a discussion of perspectives and challenges related to the use of 3D tissues in antiviral drug discovery.


Subjects
Antiviral Agents , Drug Discovery , Antiviral Agents/pharmacology , Antiviral Agents/therapeutic use , Biological Assay
17.
Angew Chem Int Ed Engl ; 62(52): e202306019, 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-37610759

ABSTRACT

In this review, the applications of isotopically labeled compounds are discussed and put into the context of their future impact on the life sciences, especially their use in the pharma and crop science industries to follow compounds' fate in the environment, in vivo, or in complex matrices, to understand the potential harm of new chemical structures, and to increase the safety of human society.


Subjects
Biological Science Disciplines , Humans , Research
18.
EMBO Mol Med ; 15(9): e18208, 2023 09 11.
Article in English | MEDLINE | ID: mdl-37538003

ABSTRACT

Human health is determined both by genetics (G) and environment (E). This is clearly illustrated in groups of individuals who are exposed to the same environmental factor showing differential responses. A quantitative measure of gene-environment interaction (GxE) effects has not been developed, and in some instances a clear consensus on the concept has not even been reached; for example, whether cancer predominantly emerges from "bad luck" or "bad lifestyle" is still debated. In this article, we provide a panel of examples of GxE interactions as drivers of pathogenesis. We highlight how epigenetic regulation can represent a common connecting aspect of the molecular bases. Our argument converges on the concept that GxE effects are recorded in the cellular epigenome, which might represent the key to deconvoluting these multidimensional, intricate layers of regulation. Developing a key to decode this epigenetic information would provide quantitative measures of disease risk. Analogously to the epigenetic clock introduced to estimate biological age, we provocatively propose the theoretical concept of an "epigenetic score-meter" to estimate disease risk.


Subjects
Gene-Environment Interaction, Neoplasms, Humans, Epigenesis, Genetic
19.
Environ Int ; 178: 108082, 2023 08.
Article in English | MEDLINE | ID: mdl-37422975

ABSTRACT

The predominantly animal-centric approach of chemical safety assessment has increasingly come under pressure. Society is questioning the overall performance, sustainability, continued relevance for human health risk assessment, and ethics of this system, demanding a change of paradigm. At the same time, the scientific toolbox used for risk assessment is continuously enriched by the development of "New Approach Methodologies" (NAMs). While this term does not define the age or the state of readiness of the innovation, it covers a wide range of methods, including quantitative structure-activity relationship (QSAR) predictions, high-throughput screening (HTS) bioassays, omics applications, cell cultures, organoids, microphysiological systems (MPS), machine learning models, and artificial intelligence (AI). In addition to promising faster and more efficient toxicity testing, NAMs have the potential to fundamentally transform today's regulatory work by allowing more human-relevant decision-making in terms of both hazard and exposure assessment. Yet, several obstacles hamper a broader application of NAMs in current regulatory risk assessment. Constraints in addressing repeated-dose toxicity, with particular reference to chronic toxicity, and hesitance from relevant stakeholders are major challenges for the implementation of NAMs in a broader context. Moreover, issues regarding predictivity, reproducibility, and quantification need to be addressed, and regulatory and legislative frameworks need to be adapted to NAMs. The conceptual perspective presented here has its focus on hazard assessment and is grounded on the main findings and conclusions from a symposium and workshop held in Berlin in November 2021. It intends to provide further insights into how NAMs can be gradually integrated into chemical risk assessment aimed at the protection of human health, until eventually the current paradigm is replaced by an animal-free "Next Generation Risk Assessment" (NGRA).


Subjects
Artificial Intelligence, Toxicity Tests, Humans, Reproducibility of Results, Toxicity Tests/methods, Risk Assessment/methods
20.
ALTEX ; 40(3): 367-388, 2023.
Article in English | MEDLINE | ID: mdl-37470349

ABSTRACT

The EU's REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) Regulation requires animal testing only as a last resort. However, our study (Knight et al., 2023) in this issue reveals that approximately 2.9 million animals have been used for REACH testing for reproductive toxicity, developmental toxicity, and repeated-dose toxicity alone as of December 2022. Currently, additional tests requiring about 1.3 million more animals are in the works. As compliance checks continue, more animal tests are anticipated. According to the European Chemicals Agency (ECHA), 75% of read-across methods have been rejected during compliance checks. Here, we estimate that 0.6 to 3.2 million animals have been used for other endpoints, likely at the lower end of this range. The ongoing discussion about the grouping of 4,500 registered petrochemicals can still have a major impact on these numbers. The 2022 amendment of REACH is estimated to add 3.6 to 7.0 million animals. This information comes as the European Parliament is set to consider changes to REACH that could further increase animal testing. Two proposals currently under discussion would likely necessitate new animal testing: extending the requirement for a chemical safety assessment (CSA) to Annex VII substances could add 1.6 to 2.6 million animals, and the registration of polymers adds a challenge comparable to the petrochemical discussion. These findings highlight the importance of understanding the current state of REACH animal testing ahead of the upcoming debate on REACH revisions, which is an opportunity to focus on reducing animal use.
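The per-category figures quoted in this abstract can be tallied into a combined low/high range. A minimal arithmetic sketch, assuming the quoted ranges are independent and additive (the category labels are shorthand introduced here, not terms from the study):

```python
# Animal-use estimates (in millions) quoted in the abstract above.
# Point estimates are represented as (low, high) with low == high.
estimates = {
    "REACH repro/dev/repeated-dose tests to Dec 2022": (2.9, 2.9),
    "additional tests currently in the works": (1.3, 1.3),
    "other endpoints": (0.6, 3.2),
    "2022 REACH amendment": (3.6, 7.0),
    "proposed CSA extension to Annex VII substances": (1.6, 2.6),
}

# Sum the lower and upper bounds separately to get a combined range.
low = sum(lo for lo, _ in estimates.values())
high = sum(hi for _, hi in estimates.values())
print(f"Combined estimate: {low:.1f} to {high:.1f} million animals")
```

This additive treatment is an illustration only; the study itself notes that some ranges (e.g. "other endpoints") are likely to fall at their lower end, and the polymer-registration impact is not quantified at all, so no single total is given in the text.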


Subjects
Animal Testing Alternatives, Toxicity Tests, Animals, Animal Testing Alternatives/methods, Toxicity Tests/methods, Risk Assessment/methods